We study the following independence testing problem: given access to samples from a distribution $P$ over $\{0,1\}^n$, decide whether $P$ is a product distribution or whether it is $\varepsilon$-far in total variation distance from any product distribution. For arbitrary distributions, this problem requires $\exp(n)$ samples. We show in this work that if $P$ has a sparse structure, then in fact only linearly many samples are required. Specifically, if $P$ is Markov with respect to a Bayesian network whose underlying DAG has in-degree bounded by $d$, then $\tilde{\Theta}(2^{d/2}\cdot n/\varepsilon^2)$ samples are necessary and sufficient for independence testing.
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
Recent years have seen concurrent, significant improvements both in the medical images used to facilitate diagnosis and in the performance of machine learning techniques on tasks such as classification, detection, and segmentation. As a result, a rapid increase in the usage of such systems can be observed in the healthcare industry, for instance in the form of medical image classification systems, where these models have achieved diagnostic parity with human physicians. One such application is in computer vision tasks such as the classification of skin lesions in dermatoscopic images. However, as stakeholders in the healthcare industry, such as insurance companies, continue to invest extensively in machine learning infrastructure, it becomes increasingly important to understand the vulnerabilities in such systems. Due to the highly critical nature of the tasks these machine learning models carry out, it is necessary to analyze techniques that could be used to exploit these vulnerabilities, as well as methods to defend against them. This paper explores common adversarial attack techniques: the Fast Gradient Sign Method and Projected Gradient Descent are used against a convolutional neural network trained to classify dermatoscopic images of skin lesions. Following that, it also discusses one of the most popular adversarial defense techniques, adversarial training. The performance of the model trained on adversarial examples is then tested against the previously mentioned attacks, and recommendations to improve neural network robustness are provided based on the results of the experiment.
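The Fast Gradient Sign Method mentioned in the abstract can be sketched in a few lines. A minimal illustration, assuming a toy logistic-regression classifier in NumPy rather than the paper's convolutional network: the attack perturbs each input feature by eps in the direction of the sign of the loss gradient, which is the perturbation that most increases the loss under an L-infinity budget.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step against a logistic-regression model (toy stand-in).

    Maximizes the log-loss for the true label y in {-1, +1} by moving x
    a distance eps (per coordinate) along the sign of the loss gradient.
    """
    margin = y * (w @ x + b)
    grad_x = -y * sigmoid(-margin) * w  # d(log-loss)/dx for this model
    return x + eps * np.sign(grad_x)

# A point the model classifies correctly...
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), +1        # clean score: w @ x + b = 1.5 > 0
# ...is flipped by the adversarial perturbation.
x_adv = fgsm(x, y, w, b, eps=2.0)      # adversarial score: w @ x_adv + b < 0
```

Against a deep network the same recipe applies, with the gradient obtained by backpropagation; adversarial training then simply mixes such perturbed examples into the training set.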
Euclidean geometry is among the earliest forms of mathematical thinking. While the geometric primitives underlying its constructions, such as perfect lines and circles, do not often occur in the natural world, humans rarely struggle to perceive and reason with them. Will computer vision models trained on natural images show the same sensitivity to Euclidean geometry? Here we explore this question by studying few-shot generalization in the universe of Euclidean geometry constructions. We introduce Geoclidean, a domain-specific language for Euclidean geometry, and use it to generate two datasets of geometric concept learning tasks for benchmarking generalization judgements of humans and machines. We find that humans are indeed sensitive to Euclidean geometry and generalize strongly from a few visual examples of a geometric concept. In contrast, low-level and high-level visual features from standard computer vision models pretrained on natural images do not support correct generalization. Thus Geoclidean represents a novel few-shot generalization benchmark for geometric concept learning, where the performance of humans and of AI models diverges. The Geoclidean framework and dataset are publicly available for download.
Despite progress in utilizing deep learning to automate chest radiograph interpretation and disease diagnosis tasks, change between sequential chest X-rays (CXRs) has received limited attention. Monitoring the progression of pathologies visualized through chest imaging poses several challenges in anatomical motion estimation and image registration, i.e., spatially aligning the two images and modeling temporal dynamics in change detection. In this work, we propose CheXRelNet, a neural model that can track longitudinal pathology relations between two CXRs. CheXRelNet incorporates local and global visual features, utilizes inter-image and intra-image anatomical information, and learns dependencies between anatomical-region attributes to accurately predict disease change for a pair of CXRs. Experimental results on the Chest ImaGenome dataset show improved downstream performance compared to baselines. The code is available at https://github.com/plan-lab/chexrelnet
The rapid growth of social networks and the ease of Internet accessibility have intensified the proliferation of fake news and rumors on social media sites. During the COVID-19 pandemic, this misleading information aggravated the situation by putting people's physical and mental lives at risk. To limit the spread of such inaccuracies, identifying fake news on online platforms could be the first step. In this research, the authors detect COVID-19 fraudulent news from the Internet by implementing five transformer-based models, namely BERT, BERT without LSTM, ALBERT, RoBERTa, and a hybrid of BERT & ALBERT. The COVID-19 fake news dataset has been used for training and testing the models. Among all these models, the RoBERTa model performed better than the others, obtaining an F1 score of 0.98 on both the real and fake classes.
It is now well known that neural networks are overconfident in their predictions, leading to poor calibration. The most common post-hoc approach to compensate for this is temperature scaling, which adjusts the confidence of predictions for any input by scaling the logits by a fixed value. While this approach typically improves average calibration across the whole test dataset, the improvement usually reduces the individual confidence of predictions regardless of whether the classification of a given input is correct or incorrect. With this insight, we base our method on the observation that different samples contribute to the calibration error by different amounts, with some needing their confidence increased and others needing it decreased. Therefore, for each input we propose to predict a different temperature value, allowing us to adjust the mismatch between confidence and accuracy at a finer granularity. Furthermore, we observe improved results on out-of-distribution (OOD) detection, and can also extract a notion of hardness for data points. Our method is applied post-hoc, consequently using very little computation time and a negligible memory footprint, and is applied to off-the-shelf pre-trained classifiers. We test it on the ResNet50 and WideResNet28-10 architectures using the CIFAR10/100 and Tiny-ImageNet datasets, showing that producing per-data-point temperatures is also beneficial for the expected calibration error across the whole test set. Code is available at: https://github.com/thwjoy/adats.
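For context, the fixed-temperature scaling that this work generalizes to per-input temperatures can be sketched as follows. A minimal NumPy illustration (not the authors' method or code): dividing the logits by a temperature T > 1 softens the softmax distribution, lowering the reported confidence without changing the predicted class.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def scaled_confidence(logits, T):
    """Max softmax probability after dividing the logits by temperature T."""
    return softmax(np.asarray(logits) / T).max()

logits = [4.0, 1.0, 0.5]
conf_raw = scaled_confidence(logits, T=1.0)  # unscaled, overconfident
conf_cal = scaled_confidence(logits, T=2.0)  # T > 1 softens the confidence
```

The paper's contribution is to replace the single global T with a value predicted per data point, so that confident-and-correct inputs need not be softened by the same amount as confident-but-wrong ones.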
Reservoir computers (RCs) are among the fastest of all neural networks to train, especially when compared with other recurrent neural networks. RCs have this advantage while still handling sequential data well. However, RC adoption has lagged behind that of other neural network models because of the model's sensitivity to its hyperparameters (HPs). A modern unified software package that automatically tunes these parameters is missing from the literature. Manually tuning these numbers is very difficult, and the cost of traditional grid search methods grows exponentially with the number of HPs considered, discouraging the use of RCs and limiting the complexity of the RC models that can be devised. We address these problems by introducing RcTorch, a PyTorch-based RC neural network package with automated HP tuning. In this paper, we demonstrate the utility of RcTorch by using it to predict the complex dynamics of a driven pendulum being acted upon by varying forces. This work includes coding examples. Example Python Jupyter notebooks can be found on our GitHub repository https://github.com/blindedjoy/rctorch, and documentation can be found at https://rctorch.readthedocs.io/.
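A minimal echo state network of the kind such packages automate can be sketched as follows. This is a NumPy toy assuming the standard ESN formulation, not RcTorch's API: the recurrent weights are random and fixed, only the linear readout is trained (one ridge-regression solve), and quantities like the spectral radius are exactly the hyperparameters the abstract says must otherwise be tuned by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir; the spectral radius is a key hyperparameter.
n_res = 200
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius to 0.9

# Drive the reservoir with a signal and collect states.
T = 1200
u = np.sin(0.1 * np.arange(T + 1))
X = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in * u[t] + W @ x)
    X[t] = x

# Train only the readout (ridge regression) to predict u one step ahead,
# discarding an initial washout period of transient states.
washout = 100
A, y = X[washout:], u[washout + 1 : T + 1]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ y)
mse = np.mean((A @ w_out - y) ** 2)
```

Because training reduces to one linear solve, the expensive part in practice is searching over hyperparameters such as the spectral radius, input scaling, and reservoir size, which is the search RcTorch automates.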
Neural painters are a class of models that follow the GAN framework to generate brushstrokes, which are then composed to create paintings. GANs are great generative models for AI art, but they are notoriously difficult to train. To overcome the limitations of GANs and to speed up neural painter training, we apply transfer learning to the process, reducing training time from days to hours while achieving the same level of visual aesthetics in the final paintings produced. We report our approach and results in this work.
At the heart of advances in many fields, from classical dynamical systems to quantum mechanics, is the efficient and accurate solution of differential equations. There has been growing interest in using physics-informed neural networks (PINNs) to solve such problems, as they offer numerous benefits over traditional numerical approaches. Despite their potential benefits for solving differential equations, transfer learning with PINNs remains under-explored. In this study, we present a general framework for transfer learning PINNs that results in one-shot inference for linear systems of both ordinary and partial differential equations. This means that highly accurate solutions to many unknown differential equations can be obtained instantaneously, without retraining the entire network. We demonstrate the efficacy of the proposed deep-learning approach by solving several real-world problems, such as first-order linear ordinary differential equations, the Poisson equation, and the time-dependent Schrödinger complex-valued partial differential equation.
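The one-shot idea for linear problems can be illustrated with a small random-feature sketch (a NumPy toy stand-in under our own simplifying assumptions, not the authors' framework): if the hidden features of the network are frozen, the solution is linear in the output weights, so fitting a linear ODE reduces to a single linear least-squares solve over collocation points, with no gradient-based retraining.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat = 100
w = rng.normal(0, 1, n_feat)  # frozen "hidden layer" weights
b = rng.normal(0, 1, n_feat)

def phi(x):
    """Hidden features tanh(w*x + b) evaluated at points x."""
    return np.tanh(np.outer(x, w) + b)

def dphi(x):
    """Exact derivatives of the features with respect to x."""
    return (1 - np.tanh(np.outer(x, w) + b) ** 2) * w

# Solve u' + u = 0, u(0) = 1 on [0, 2] in one linear least-squares solve:
# stack the ODE residual rows (phi' + phi) c = 0 at collocation points
# with one (weighted) boundary-condition row phi(0) c = 1.
x = np.linspace(0.0, 2.0, 50)
A = np.vstack([dphi(x) + phi(x), 10.0 * phi(np.array([0.0]))])
rhs = np.concatenate([np.zeros(len(x)), [10.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u = phi(x) @ c
err = np.max(np.abs(u - np.exp(-x)))  # compare with exact solution e^{-x}
```

A new linear equation (new coefficients or right-hand side) only changes the rows of `A` and `rhs`; the frozen features are reused, which is what makes the inference one-shot.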